Update PyTorch to 2.7.1+cpu, Torchvision to 0.22.1+cpu, and Python Requirement to >=3.9 #524
base: main
Conversation
Signed-off-by: Abukhoyer Shaik <[email protected]>
Let's run CI and all the models manually that are not currently covered under CI. Once all are passing, we can go ahead and merge it.
Signed-off-by: Abukhoyer Shaik <[email protected]>
Signed-off-by: Abukhoyer Shaik <[email protected]>
@@ -24,7 +24,7 @@ pipeline {
     pip install .[test] &&
     pip install junitparser pytest-xdist &&
     pip install librosa==0.10.2 soundfile==0.13.1 && # packages needed to load example for whisper testing
-    pip install --extra-index-url https://download.pytorch.org/whl/cpu timm==1.0.14 torchvision==0.19.1+cpu einops==0.8.1 && # packages to load VLMs
+    pip install --extra-index-url https://download.pytorch.org/whl/cpu timm==1.0.14 torchvision==0.22.1+cpu einops==0.8.1 && # packages to load VLMs
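The install line above pins torchvision exactly (`==0.22.1+cpu`). As a minimal sketch of how such a pin can be checked against an installed version, the snippet below compares only the release part and drops the `+cpu` local segment, as PEP 440 version comparison does; the helper names are illustrative, not part of this repo:

```python
def version_tuple(version: str) -> tuple:
    """Release part of a version as a tuple of ints, e.g. "0.22.1+cpu" -> (0, 22, 1)."""
    public = version.split("+", 1)[0]  # drop the local segment ("+cpu")
    return tuple(int(part) for part in public.split("."))

def matches_pin(installed: str, pinned: str) -> bool:
    """True when the release parts match exactly, as an `==` pin requires."""
    return version_tuple(installed) == version_tuple(pinned)

print(matches_pin("0.22.1+cpu", "0.22.1+cpu"))  # True: satisfies the new pin
print(matches_pin("0.19.1+cpu", "0.22.1+cpu"))  # False: the old version no longer matches
```

The same comparison applies to the `torch==2.7.1+cpu` pin from this PR.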
Please update the README files of InternVL and the other VLMs as well with the updated versions of these packages.
Where are we on this PR? Have we executed all the CI test cases and the models which are not covered in CI?
A few Vision-Language Models remain to be tested, such as Llama4 and Gemma3. Since these are big models, we sometimes hit machine issues while testing them. They are covered in CI with single- or double-layer configurations, but I am testing the full models locally.
This PR updates the following dependencies to their latest CPU-only versions:
- PyTorch: 2.7.1+cpu
- Torchvision: 0.22.1+cpu

Additionally, it updates the Python version requirement to >=3.9.
Reference: https://pytorch.org/get-started/previous-versions/
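The raised Python floor can be checked up front before installing. A minimal stdlib-only sketch (the helper name is illustrative, not part of this repo):

```python
import sys

REQUIRED = (3, 9)  # from the new python_requires ">=3.9" in this PR

def python_ok(version_info=sys.version_info, required=REQUIRED) -> bool:
    # Tuples compare element-wise, so (major, minor) >= (3, 9) matches ">=3.9".
    return tuple(version_info[:2]) >= required

print(python_ok((3, 9, 0)))   # True: meets the new floor
print(python_ok((3, 8, 18)))  # False: below the new floor
```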
Testing
Multimodal example scripts have been tested successfully, including Llama4, Gemma3, Whisper, Granite Vision, InternVL, and mLlama.
The following Causal models have been tested and verified to work correctly with the updated dependencies:
DeepSeek-R1-Distill-Qwen-32B
EleutherAI/gpt-j-6b
OpenGVLab/InternVL2_5-1B
bigcode/starcoder
bigcode/starcoder2-15b
codellama/CodeLlama-13b-hf
codellama/CodeLlama-34b-hf
codellama/CodeLlama-7b-hf
google/codegemma-2b
google/codegemma-7b
google/gemma-2-27b
google/gemma-2-2b
google/gemma-2-9b
google/gemma-2b
google/gemma-7b
ibm-granite/granite-20b-code-base-8k
ibm-granite/granite-20b-code-instruct-8k
ibm-granite/granite-3.1-8b-instruct
ibm-granite/granite-guardian-3.1-8b
inceptionai/jais-adapted-13b-chat
inceptionai/jais-adapted-7b
lmsys/vicuna-13b-delta-v0
lmsys/vicuna-13b-v1.3
lmsys/vicuna-13b-v1.5
meta-llama/Llama-2-13b-chat-hf
meta-llama/Llama-2-7b-chat-hf
meta-llama/Llama-3.1-8B
meta-llama/Llama-3.2-1B
meta-llama/Llama-3.2-3B
meta-llama/Meta-Llama-3-8B
microsoft/Phi-3-mini-4k-instruct
mistralai/Codestral-22B-v0.1
mistralai/Mistral-7B-Instruct-v0.1
mistralai/Mixtral-8x7B-Instruct-v0.1
mistralai/Mixtral-8x7B-v0.1
mosaicml/mpt-7b
openai-community/gpt2
tiiuae/falcon-40b
Qwen/Qwen2-1.5B-Instruct